Fairness Is Not Enough: Auditing Competence and Intersectional Bias in AI-powered Resume Screening

Webster, Kevin T

arXiv.org Artificial Intelligence

The increasing use of generative AI for resume screening is predicated on the assumption that it offers an unbiased alternative to biased human decision-making. However, this belief fails to address a critical question: are these AI systems fundamentally competent at the evaluative tasks they are meant to perform? This study investigates the question of competence through a two-part audit of eight major AI platforms. Experiment 1 confirmed complex, contextual racial and gender biases, with some models penalizing candidates merely for the presence of demographic signals. Experiment 2, which evaluated core competence, provided a critical insight: some models that appeared unbiased were, in fact, incapable of performing a substantive evaluation, relying instead on superficial keyword matching. This paper introduces the "Illusion of Neutrality" to describe this phenomenon, where an apparent lack of bias is merely a symptom of a model's inability to make meaningful judgments. This study recommends that organizations and regulators adopt a dual-validation framework, auditing AI hiring tools for both demographic bias and demonstrable competence to ensure they are both equitable and effective.
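The dual-validation idea above can be illustrated with a minimal sketch. This is not the paper's methodology: `score_resume` is a hypothetical stand-in for any AI screening system (here a naive keyword matcher), and the resumes and names are invented for illustration. The point is that a matched-pair bias audit alone can pass a screener that fails a basic competence check.

```python
# Illustrative sketch of a dual-validation audit: test for demographic bias
# AND for demonstrable competence. All names, resumes, and the scorer itself
# are hypothetical stand-ins, not the paper's actual experimental setup.

def score_resume(text: str) -> float:
    # Placeholder model: a superficial keyword matcher, the kind of
    # evaluator that can look "unbiased" while making no real judgment.
    keywords = {"python", "sql", "leadership"}
    return sum(1.0 for w in keywords if w in text.lower())

BASE_RESUME = "Experienced analyst skilled in Python and SQL. {name}."

def demographic_bias_gap(names_a, names_b):
    """Mean score difference between two name groups on otherwise
    identical resumes; a nonzero gap signals demographic bias."""
    avg = lambda names: sum(
        score_resume(BASE_RESUME.format(name=n)) for n in names
    ) / len(names)
    return avg(names_a) - avg(names_b)

def competence_check(strong: str, weak: str) -> bool:
    """A competent screener should rank a substantively stronger resume
    above a keyword-stuffed weak one."""
    return score_resume(strong) > score_resume(weak)

gap = demographic_bias_gap(["Emily", "Greg"], ["Lakisha", "Jamal"])
print(f"bias gap: {gap:+.2f}")  # 0.00: the keyword matcher looks "unbiased"
print("competent:", competence_check(
    "Led Python migration, designed SQL data warehouse, managed team leadership",
    "python sql leadership python sql leadership",
))  # False: it cannot distinguish substance from keyword stuffing
```

A zero bias gap here is exactly the "Illusion of Neutrality" the abstract describes: the scorer is neutral only because it is incapable of a meaningful evaluation, which is why the competence check must be run alongside the bias audit.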


Government watchdog asks Missouri AG to investigate MSU business boot camp that excluded White males

FOX News

A government watchdog group on Tuesday requested that Missouri Attorney General Andrew Bailey investigate a taxpayer-funded Missouri State University business boot camp program that excluded White males. In a letter to Bailey's office, the Equal Protection Project (EPP) alleged that Missouri State University (MSU) was "engaging in racial- and gender-based discrimination through its sponsorship, promotion, and hosting of a small business training 'boot camp' that limits participation" to women and people who identify as "BIPOC" – an acronym for "Black, Indigenous and Persons of Color." MSU began accepting applications for the Spring 2023 Early-Stage Business Boot Camp program in late November. The university said the program was for "aspiring or current BIPOC and/or women small business owners who have recently started or are in the idea phase" and live in Southern Missouri.


Ethical AI: Demographic Bias in Facial Recognition Technology

#artificialintelligence

There is a tremendous amount of misleading and inaccurate reporting on the topic of demographic bias in biometric identification systems, especially regarding facial recognition technology. Part of the problem is that there isn't one thing that is "facial recognition technology". At the core of any biometric system is a matching algorithm. The definitive resource on the topic of demographic bias in biometrics is the NIST Face Recognition Vendor Test (FRVT) Part 3 Demographic Effects report. Warning: this 82-page report is not an easy read and you really should read parts 1 and 2 first to get the context.


Is AI sexist and racist?

#artificialintelligence

We all use facial recognition to unlock our phones. And we all view online content automatically suggested to us. But some of us have rather more success with artificial intelligence (AI) than others. A study of face recognition AIs discovered that systems from leading companies IBM, Microsoft and Amazon misclassified the faces of Oprah Winfrey, Michelle Obama and Serena Williams, while having no trouble at all with white males. Even digital assistants such as Cortana and Google Assistant have female voices by default, perhaps unconsciously reinforcing the stereotype of female subservience in the minds of millions of users.


The Path to Ethical AI Starts With Collaboration

#artificialintelligence

To the layman, the phrase "ethical AI" can sound like a misnomer. AI oftentimes still conjures visions of a dystopian future in which artificial intelligence runs rampant, dominating humankind. Thanks to modern-era entertainment in films such as 2001: A Space Odyssey (HAL 9000) or The Terminator, public perception of AI has been limited to these fictional depictions. So it should come as no surprise that when we talk about ethical AI, people assume its inverse involves robots, lasers, and a war to end humanity. In truth, the conversation around ethical AI typically boils down to societal issues such as data collection, cyberattacks on critical infrastructure, and inherent bias in code.


VIDEO: The REAL Problems with AI – My Talk with Kate Crawford

#artificialintelligence

In the world of AI and machine learning, it's common to hear fear-based statements in the media, or from your neighbors, about what the poor decisions unchecked algorithms might make -- everything from denying you credit to launching nuclear missiles. What most people rarely hear about are the actual challenges that cause AI practitioners to worry. At FICO World 2019 in New York, I sat down with Kate Crawford to discuss these kinds of problems with AI. Kate is a Distinguished Research Professor at NYU and a Principal Researcher at Microsoft Research, as well as the co-founder of AI Now. We found a lot of common ground as we explored data bias, untrained data scientists and other concerns.


Robot Art Critics Are Rolling into a Museum Near You

#artificialintelligence

With a black bowler hat and a chiffon white scarf, Berenson certainly looks the part of a stuffy art connoisseur -- so long as you ignore the neural network poking out from his suit. Meet the Art Critic 2.0, built from gleaming metal and sleek sensors, with equal parts smarts and snob. The sage art critic once commanded considerable power in creative spheres, making or breaking an artist's career with a simple smirk of disapproval or a punishing review in next day's paper. But today, as the number of full-time art critics dwindles in newsrooms, a growing force of high-tech art experts is starting to pick up the slack by methodically decoding art's finest details. In Canada, the Roomba-esque kulturBOT snaps photos at exhibitions and uses an algorithm-powered "stream of consciousness" to tweet out the images with often nonsensical captions like "panting with love of danger" or "streaked with the nocturnal vibration."


Blaming video games for school shootings may reflect racist beliefs, study says

Daily Mail - Science & tech

People have long blamed video games as a cause of school shootings, but a new study has found that such blame is more likely when the perpetrator is white. Researchers found that video games are eight times more likely to be mentioned when the perpetrator was a white male than when the shooter was an African American male. Experts believe the public looks for an explanation for this type of behavior when the act is carried out by someone who doesn't match the racial stereotype of a violent person. Although many politicians and media outlets point to violent video games as the cause of school shootings, experts have yet to find scientific evidence to support these claims. 'Video games are often used by lawmakers and others as a red herring to distract from other potential causes of school shootings,' said lead researcher Patrick Markey, PhD, a psychology professor at Villanova University.


Enterprise AI Trends: Where are We Now and Where are We Going?

#artificialintelligence

Is there more to AI than automation and data processing? Now that some of the more basic functions of AI have matured, we're going to see a sharp increase in the sophistication of AI as it becomes more and more human. The following is a brief look at where the enterprise is when it comes to AI, and where we're about to start heading. These are the enterprise AI trends making headlines. As noted above, there are some basic functions of AI that most companies have already adopted, and that we as users can feel confident about when it comes to functionality.